Improving Semi-Supervised Target Alignment via Label-Aware Base Kernels
Authors
Abstract
Semi-supervised kernel design is an essential step for obtaining good predictive performance in semi-supervised learning tasks. In the current literature, a large family of algorithms builds the new kernel as a weighted average of predefined base kernels. While optimal weighting schemes have been studied extensively, the choice of base kernels has received much less attention. Many methods simply adopt the empirical kernel matrix or its eigenvectors. Such base kernels are computed irrespective of class labels and may not always reflect useful structures in the data. As a result, when the base kernels are poor, generalization performance can degrade no matter how carefully their weights are tuned. In this paper, we propose to construct high-quality base kernels with the help of label information in order to globally improve the final target alignment. In particular, we devise label-aware kernel eigenvectors under the framework of semi-supervised eigenfunction extrapolation, which span base kernels that are more useful for learning. Such base kernels are individually better aligned to the learning target, so their mixture is more likely to yield a good classifier. Our approach is computationally efficient and demonstrates encouraging performance in semi-supervised classification and regression.
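The sketch below is not the paper's label-aware extrapolation method; it only illustrates the two ingredients the abstract refers to, namely kernel-target alignment and the conventional (label-agnostic) rank-one base kernels spanned by eigenvectors of an empirical kernel matrix, combined with alignment-proportional weights as one hypothetical weighting scheme. All function names and the toy data are assumptions for illustration; the paper's contribution is to replace the label-agnostic eigenvectors with label-aware ones obtained by semi-supervised eigenfunction extrapolation.

```python
import numpy as np

def target_alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F) on labeled data."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

def rank_one_base_kernels(K_emp, r):
    """Top-r eigenvectors of the empirical gram matrix; each spans a rank-one base kernel v v^T."""
    vals, vecs = np.linalg.eigh(K_emp)          # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:r]
    return [np.outer(vecs[:, i], vecs[:, i]) for i in order]

def alignment_weighted_mixture(base_kernels, y_lab, lab_idx):
    """Weight each base kernel by its alignment on the labeled block, then mix them convexly
    (a stand-in for whatever weight-optimization scheme is actually used)."""
    w = np.array([max(target_alignment(B[np.ix_(lab_idx, lab_idx)], y_lab), 0.0)
                  for B in base_kernels])
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    return sum(wi * B for wi, B in zip(w, base_kernels))

# Toy usage: two Gaussian clusters, a few labeled points, RBF empirical kernel.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (10, 2)), rng.normal(+1, 0.5, (10, 2))])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_emp = np.exp(-sq_dists / 2.0)
lab_idx = np.array([0, 1, 2, 10, 11, 12])
y_lab = np.array([-1, -1, -1, +1, +1, +1])
K_mix = alignment_weighted_mixture(rank_one_base_kernels(K_emp, 5), y_lab, lab_idx)
print(target_alignment(K_mix[np.ix_(lab_idx, lab_idx)], y_lab))
```

In this toy setting the mixture's alignment on the labeled block can only be as good as the individual base kernels allow, which is the limitation the paper addresses by making the base kernels themselves label-aware.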
Similar Resources
Image alignment via kernelized feature learning
Machine learning is an application of artificial intelligence that automatically learns and improves from experience without being explicitly programmed. The primary assumption of most machine learning algorithms is that the training set (source domain) and the test set (target domain) are drawn from the same probability distribution. However, in most of the real-world application...
Data dependent kernels in nearly-linear time
We propose a method to efficiently construct data-dependent kernels which can make use of large quantities of (unlabeled) data. Our construction makes an approximation in the standard construction of semi-supervised kernels in Sindhwani et al. (2005). In typical cases these kernels can be computed in nearly-linear time (in the amount of data), improving on the cubic time of the standard constru...
Cross-Domain Kernel Induction for Transfer Learning
The key question in transfer learning (TL) research is how to make model induction transferable across different domains. Common methods so far require source and target domains to have a shared/homogeneous feature space, or the projection of features from heterogeneous domains onto a shared space. This paper proposes a novel framework, which does not require a shared feature space but instead ...
Semi-Supervised Matrix Completion for Cross-Lingual Text Classification
Cross-lingual text classification is the task of assigning labels to observed documents in a label-scarce target language domain by using a prediction model trained with labeled documents from a label-rich source language domain. Cross-lingual text classification is widely studied in the natural language processing area to reduce the expensive manual annotation effort required in the target lang...
Semi-Supervised Block ITG Models for Word Alignment
Labeled training data for the word alignment task, in the form of word-aligned sentence pairs, is hard to come by for many language pairs. Hence, it is natural to draw upon semi-supervised learning methods (Fraser and Marcu, 2006). We introduce a semi-supervised learning method for word alignment using conditional entropy regularization (Grandvalet and Bengio, 2005) on top of a BITG-based discri...
Publication date: 2014